# Low perplexity quantization
## Llama 3.1 1 Million Ctx Dark Planet 8B GGUF
An 8B-parameter model based on Llama 3.1 with a 1-million-token context length, tuned for creative writing and role-play, and noted for high stability and low perplexity.
- Author: DavidAU
- License: Apache-2.0
- Tags: Large Language Model, English
- Downloads: 882 · Likes: 2
## L3 8B Stheno V3.3 32K Ultra NEO V1 IMATRIX GGUF
An 8B-parameter model built with NEO CLASS technology, offering a 32K context window and improved instruction following.
- Author: DavidAU
- License: Apache-2.0
- Tags: Large Language Model, English
- Downloads: 1,086 · Likes: 11
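Both cards highlight low perplexity as a selling point of these quants. As a way to sanity-check that claim on a downloaded file, here is a minimal sketch of a single-window perplexity estimate using the llama-cpp-python bindings; the GGUF filename, evaluation text file, and context size are placeholders, not values taken from the listings above.

```python
# Minimal perplexity spot-check for a GGUF quant (sketch; paths are placeholders).
import numpy as np
from llama_cpp import Llama

MODEL_PATH = "dark-planet-8b.Q4_K_M.gguf"               # hypothetical local quant file
EVAL_TEXT = open("eval.txt", encoding="utf-8").read()   # any held-out plain text

# logits_all=True keeps per-position logits so every token can be scored.
llm = Llama(model_path=MODEL_PATH, n_ctx=2048, logits_all=True, verbose=False)

tokens = llm.tokenize(EVAL_TEXT.encode("utf-8"))
tokens = tokens[: llm.n_ctx()]                           # single context window for simplicity
llm.reset()
llm.eval(tokens)

# scores[i] holds the logits predicted after token i, so token i+1 is the target.
logits = np.asarray(llm.scores[: len(tokens) - 1], dtype=np.float64)
targets = np.asarray(tokens[1:])

# Log-softmax over the vocabulary, then pick the log-prob of each observed token.
max_l = logits.max(axis=-1, keepdims=True)
log_z = np.log(np.exp(logits - max_l).sum(axis=-1)) + max_l[:, 0]
token_logprobs = logits[np.arange(len(targets)), targets] - log_z

ppl = float(np.exp(-token_logprobs.mean()))
print(f"perplexity over {len(targets)} tokens: {ppl:.2f}")
```

llama.cpp also ships a dedicated perplexity tool that evaluates sliding windows over a full corpus and is the usual way these numbers are reported; the snippet above is only meant to show where a perplexity figure comes from.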